attention dropout: 0
Figure 2: These two graphs cannot be distinguished by the 1-WL test. The COMBINE step takes the result of AGGREGATE and the previous representation of the current node as input. We reduce the FFN inner-layer dimension of 4d in [47] to d, which does not appreciably hurt performance but significantly saves parameters. The embedding dropout ratio is set to 0.1 by default, as in many previous Transformer works [11, 34]. The rest of the hyper-parameters remain unchanged. Table 8 summarizes the hyper-parameters used for fine-tuning Graphormer on OGBG-MolPCBA.
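To make the parameter saving concrete, here is a quick back-of-the-envelope sketch. The dimension d = 768 below is only an illustrative value (an assumption, not taken from the paper); the counting convention (two weight matrices plus biases) is the standard one for a two-layer FFN.

```python
def ffn_params(d_model: int, d_inner: int) -> int:
    """Parameter count of a two-layer FFN: W1 (d_model x d_inner) + b1,
    then W2 (d_inner x d_model) + b2."""
    return d_model * d_inner + d_inner + d_inner * d_model + d_model

d = 768  # illustrative model dimension (assumption)
standard = ffn_params(d, 4 * d)  # inner dimension 4d, as in [47]
reduced = ffn_params(d, d)       # inner dimension d, as described above
print(standard, reduced)
```

Shrinking the inner dimension from 4d to d cuts the FFN's parameter count by roughly a factor of four, since both weight matrices scale linearly with the inner dimension.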
For easier derivation, we have introduced the notation qi. Sequence-level prediction: this is essentially the case we consider in most of our experiments, where we want to obtain a vectorial representation of the input sequence, as in text classification. Finally, although we focus our discussion on NLP tasks in this paper, Funnel-Transformer can be applied to any task dealing with sequential data, such as time series and video stream analysis.

B.1 Preprocessing & Tokenization

For all experiments conducted in this work, we simply adopt the "uncased" word piece model originally used by BERT [2], whose vocabulary size is about 30K. Specifically, we find that training can be unstable when the depth goes beyond 24 layers (in the case of B10-10-10H1024) at base scale, especially for the MLM objective.
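Sequence-level prediction reduces the token-level hidden states to a single vector before the classifier. The excerpt does not specify the pooling strategy, so the two below ([CLS]-style first-token pooling and mean pooling) are generic sketches of common choices, not the paper's method.

```python
import numpy as np

def cls_pool(hidden_states: np.ndarray) -> np.ndarray:
    """Use the first ([CLS]-style) token's hidden state as the sequence vector."""
    return hidden_states[0]

def mean_pool(hidden_states: np.ndarray) -> np.ndarray:
    """Average the hidden states over the sequence axis."""
    return hidden_states.mean(axis=0)

# toy example: 4 tokens, hidden size 3
h = np.arange(12, dtype=float).reshape(4, 3)
v_cls = cls_pool(h)    # shape (3,)
v_mean = mean_pool(h)  # shape (3,)
```

Either pooled vector can then be fed to a linear classification head; first-token pooling is the convention BERT-style models use for text classification.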